3D point clouds can flexibly represent continuous surfaces and are used in a variety of applications; however, the lack of structural information makes point cloud recognition challenging. Recent edge-aware methods mainly use edge information as an extra feature describing local structure to facilitate learning. Although these methods show that incorporating edges into the network design is beneficial, they generally lack interpretability, leaving users wondering how exactly the edges help. To shed light on this issue, in this study we propose Diffusion Unit (DU), which handles edges in an interpretable manner while providing decent improvements. Our method is interpretable in three ways. First, we theoretically show that DU learns to perform task-beneficial edge enhancement and suppression. Second, we experimentally observe and verify this edge enhancement and suppression behavior. Third, we empirically demonstrate that the behavior contributes to improved performance. Extensive experiments on challenging benchmarks verify the advantages of DU in terms of both interpretability and performance gain. Specifically, our method achieves state-of-the-art performance in object part segmentation on ShapeNet Part and in scene segmentation on S3DIS. Our source code will be released at https://github.com/martianxiu/diffusionunit.
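For background intuition only (this is the classical Perona-Malik model from image processing, not the paper's DU): anisotropic diffusion already exhibits exactly this enhancement/suppression dichotomy, governed by a conductance function g.

```latex
% Perona--Malik anisotropic diffusion (classical background, not DU's exact form):
\[
\frac{\partial f}{\partial t}
  = \operatorname{div}\!\bigl( g(\lVert \nabla f \rVert)\, \nabla f \bigr),
\qquad
g(s) = \frac{1}{1 + (s/\kappa)^{2}} .
\]
% Where the flux s*g(s) is increasing (gradients below \kappa), diffusion smooths,
% i.e., suppresses weak edges; where it is decreasing (gradients above \kappa),
% diffusion acts backward and sharpens, i.e., enhances strong edges.
```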
Modeling local surface geometry is challenging in 3D point cloud understanding due to the lack of connectivity information. Most prior works model local geometry using various convolution operations. We observe that a convolution can be equivalently decomposed into a weighted combination of a local component and a global component. With this observation, we explicitly decouple the two components so that the local one can be enhanced to facilitate the learning of local surface geometry. Specifically, we propose Laplacian Unit (LU), a simple yet effective architectural unit that strengthens the learning of local geometry. Extensive experiments show that networks equipped with LU achieve competitive or superior performance on typical point cloud understanding tasks. Moreover, by establishing a connection with mean curvature flow, we further investigate LU from a curvature perspective to interpret its adaptive smoothing and sharpening effects. The code will be made available.
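As a rough illustration of the decoupling idea (a minimal sketch under my own simplifications, not the authors' exact LU): the local component can be isolated as an umbrella/graph Laplacian over k-NN neighborhoods and added back through a small learned transform. Applied to point coordinates instead of features, the same update is the discrete analogue of mean curvature flow.

```python
import torch
import torch.nn as nn

class LaplacianUnitSketch(nn.Module):
    """Minimal sketch of a Laplacian-style unit for point features.

    Computes the umbrella (graph) Laplacian over k-NN neighborhoods and adds
    a learned transform of it back to the features as a residual. This is a
    hypothetical simplification, not the paper's exact LU.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels),
        )

    def forward(self, feats: torch.Tensor, knn_idx: torch.Tensor) -> torch.Tensor:
        # feats: (N, C) per-point features; knn_idx: (N, k) neighbor indices.
        neighbors = feats[knn_idx]                  # (N, k, C)
        laplacian = neighbors.mean(dim=1) - feats   # (N, C) umbrella Laplacian
        return feats + self.mlp(laplacian)          # residual update

# Toy usage: 128 points, 16-d features, 8 neighbors (random indices here).
x = torch.randn(128, 16)
idx = torch.randint(0, 128, (128, 8))
print(LaplacianUnitSketch(16)(x, idx).shape)  # torch.Size([128, 16])
```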
Learning from point clouds is challenging due to the lack of connectivity information, i.e., edges. Although existing edge-aware methods can improve performance by modeling edges, how the edges contribute to the improvement remains unclear. In this study, we propose a method that automatically learns to enhance or suppress edges while keeping its working mechanism transparent. First, we theoretically analyze how edge enhancement and suppression work. Second, we experimentally verify the edge enhancement and suppression behavior. Third, we empirically show that this behavior improves performance. Overall, we observe that the proposed method achieves competitive performance in point cloud classification and segmentation tasks.
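A minimal sketch of what such a learnable enhance/suppress mechanism can look like (my illustrative reading; the class name and the tanh gating are assumptions, not the paper's operator): a signed gate predicted from each neighbor difference scales that difference before aggregation, so positive gates diffuse features toward neighbors (edge suppression) while negative gates push them apart (edge enhancement).

```python
import torch
import torch.nn as nn

class EdgeGateSketch(nn.Module):
    """Illustrative edge enhancement/suppression unit (hypothetical sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(channels, channels), nn.Tanh())

    def forward(self, feats: torch.Tensor, knn_idx: torch.Tensor) -> torch.Tensor:
        diffs = feats[knn_idx] - feats[:, None, :]       # (N, k, C) "edges"
        # Gate in (-1, 1): positive entries smooth (suppress), negative sharpen (enhance).
        return feats + (self.gate(diffs) * diffs).mean(dim=1)

out = EdgeGateSketch(16)(torch.randn(128, 16), torch.randint(0, 128, (128, 8)))
print(out.shape)  # torch.Size([128, 16])
```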
This study demonstrates the feasibility of point cloud-based proactive link quality prediction for millimeter-wave (mmWave) communications. Image-based methods have been proposed that apply machine learning to time series of depth images to quantitatively and deterministically predict future received signal strength, in order to mitigate line-of-sight (LOS) path blockage by human bodies in mmWave communications. However, image-based methods have been limited in their applicable environments because camera images may contain private information. Thus, this study demonstrates the feasibility of using point clouds obtained from light detection and ranging (LiDAR) for mmWave link quality prediction. Point clouds represent three-dimensional (3D) spaces as a set of points and are sparser and less likely to contain sensitive information than camera images. Additionally, point clouds provide the 3D position and motion information that is necessary for understanding the radio propagation environment involving pedestrians. This study designs the mmWave link quality prediction method and conducts two experimental evaluations using different types of point clouds obtained from LiDAR and depth cameras, as well as different numerical indicators of link quality: received signal strength and throughput. These experiments show that the proposed method can predict future large attenuation of mmWave link quality caused by LOS blockage by human bodies; our point cloud-based method can therefore serve as an alternative to image-based methods.
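A hedged sketch of this kind of pipeline (the PointNet-style encoder, the GRU, and all sizes are my assumptions, not the paper's model): each LiDAR frame is embedded with a shared point encoder, and a recurrent network regresses future received signal strength from the frame sequence.

```python
import torch
import torch.nn as nn

class LinkQualityPredictor(nn.Module):
    """Sketch of point cloud-based proactive link quality prediction.

    A shared per-point MLP with symmetric max-pooling embeds each frame's
    point cloud; a GRU over the frame embeddings regresses the received
    signal strength several frames ahead. Illustrative assumptions only.
    """

    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, emb_dim))
        self.gru = nn.GRU(emb_dim, emb_dim, batch_first=True)
        self.head = nn.Linear(emb_dim, 1)  # future RSS in dB

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, N, 3) = batch of T LiDAR frames with N points each.
        per_point = self.point_mlp(frames)         # (B, T, N, emb_dim)
        per_frame = per_point.max(dim=2).values    # pool over points per frame
        _, h = self.gru(per_frame)                 # summarize the time series
        return self.head(h[-1]).squeeze(-1)        # (B,) predicted future RSS

# Toy usage: 2 sequences of 10 frames, 256 points each.
rss = LinkQualityPredictor()(torch.randn(2, 10, 256, 3))
print(rss.shape)  # torch.Size([2])
```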
Advanced visual localization techniques encompass image retrieval challenges and 6 Degree-of-Freedom (DoF) camera pose estimation, as in hierarchical localization. Thus, they must extract global and local features from input images. Previous methods have achieved this through resource-intensive or accuracy-reducing means, such as combinatorial pipelines or multi-task distillation. In this study, we present a novel method called SuperGF, which effectively unifies local and global features for visual localization, achieving a better trade-off between localization accuracy and computational efficiency. Specifically, SuperGF is a transformer-based aggregation model that operates directly on image-matching-specific local features and generates global features for retrieval. We conduct experimental evaluations of our method in terms of both accuracy and efficiency, demonstrating its advantages over other methods. We also provide implementations of SuperGF using various types of local features, including dense and sparse learning-based as well as hand-crafted descriptors.
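A minimal sketch of such transformer-based aggregation (the CLS-token pooling and all dimensions are assumptions, not SuperGF's actual design): local descriptors enter a transformer encoder, and a learned token is read out as the L2-normalized global retrieval feature.

```python
import torch
import torch.nn as nn

class GlobalAggregatorSketch(nn.Module):
    """Sketch of aggregating local descriptors into a global retrieval
    feature with a transformer encoder. Hypothetical design choices."""

    def __init__(self, dim: int = 256, heads: int = 8, layers: int = 4):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        enc_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, local_desc: torch.Tensor) -> torch.Tensor:
        # local_desc: (B, M, dim) local descriptors (e.g., keypoint features).
        tokens = torch.cat(
            [self.cls.expand(local_desc.size(0), -1, -1), local_desc], dim=1)
        encoded = self.encoder(tokens)
        return nn.functional.normalize(encoded[:, 0], dim=-1)  # global feature

g = GlobalAggregatorSketch()(torch.randn(2, 500, 256))
print(g.shape)  # torch.Size([2, 256])
```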
Community Question Answering (CQA) sites have spread and multiplied significantly in recent years. Sites like Reddit, Quora, and Stack Exchange are becoming popular amongst people interested in finding answers to diverse questions. One practical way of finding such answers is automatically predicting the best candidate given existing answers and comments. Many studies have been conducted on answer prediction in CQA, but with limited focus on using the background information of the questioners. We address this limitation using a novel method for predicting the best answers using the questioner's background information and other features, such as the textual content or the relationships with other participants. Our answer classification model was trained using the Stack Exchange dataset and validated using the Area Under the Curve (AUC) metric. The experimental results show that the proposed method complements previous methods by pointing out the importance of the relationships between users, particularly through the level of involvement in different communities on Stack Exchange. Furthermore, we point out that there is little overlap between user-relation information and the information represented by the shallow text features and the meta-features, such as time differences.
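A toy sketch of this kind of classifier and its AUC validation (synthetic random features stand in for the Stack Exchange feature groups named above; the model choice and all dimensions are assumptions):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Illustrative feature matrix for answer candidates; the three groups mirror
# those discussed above (names and dimensions are assumptions):
rng = np.random.default_rng(0)
n = 1000
text_feats = rng.normal(size=(n, 10))      # shallow text features (length, TF-IDF stats, ...)
meta_feats = rng.normal(size=(n, 5))       # meta-features (time differences, scores, ...)
relation_feats = rng.normal(size=(n, 5))   # questioner background / user-relation features
X = np.hstack([text_feats, meta_feats, relation_feats])
y = rng.integers(0, 2, size=n)             # 1 = accepted ("best") answer

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```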
Sequence transducers, such as the RNN-T and the Conformer-T, are among the most promising models for end-to-end speech recognition, especially in streaming scenarios where both latency and accuracy are important. Although various methods, such as alignment-restricted training and FastEmit, have been studied to reduce latency, latency reduction is often accompanied by a significant degradation in accuracy. We argue that this suboptimal performance might arise because none of the prior methods explicitly model and reduce the latency. In this paper, we propose a new training method that explicitly models and reduces the latency of sequence transducer models. First, we define the expected latency at each diagonal line of the lattice and show that its gradient can be computed efficiently within the forward-backward algorithm. We then augment the transducer loss with this expected latency, so that an optimal trade-off between latency and accuracy is achieved. Experimental results on the WSJ dataset show that the proposed minimum latency training reduces the latency of a causal Conformer-T from 220 ms to 27 ms within a WER degradation of 0.7%, outperforming the conventional alignment-restricted training (110 ms) and FastEmit (67 ms) methods.
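In hedged form, with notation assumed rather than copied from the paper: every monotone transducer alignment crosses each anti-diagonal of the (t, u) lattice exactly once, so node posteriors obtained from the forward-backward variables define an expected time index per diagonal, which can then be mixed into the training loss.

```latex
% Sketch of the latency-augmented objective (assumed notation).
% Node posteriors from the forward/backward variables sum to one per diagonal:
\[
\gamma(t,u) = \frac{\alpha(t,u)\,\beta(t,u)}{P(\mathbf{y}\mid\mathbf{x})},
\qquad
\sum_{t+u=k} \gamma(t,u) = 1 .
\]
% Expected time index accumulated over diagonals, traded off against accuracy:
\[
\mathcal{L}_{\text{lat}} = \sum_{k}\;\sum_{t+u=k} \gamma(t,u)\,t,
\qquad
\mathcal{L} = \mathcal{L}_{\text{transducer}} + \lambda\,\mathcal{L}_{\text{lat}} .
\]
```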
This paper proposes a neural architecture search (NAS) method for split computing. Split computing is an emerging machine learning inference technique that addresses the privacy and latency challenges of deploying deep learning in IoT systems. In split computing, a neural network model is separated and cooperatively processed between an edge server and an IoT device over the network. The architecture of the neural network model therefore significantly affects the communication payload size, model accuracy, and computational load. In this paper, we address the challenge of optimizing neural network architectures for split computing. To this end, we propose NASC, which jointly explores the optimal model architecture and a split point so that a latency requirement is met (i.e., the total latency of computation and communication is smaller than a certain threshold). NASC employs a one-shot NAS that does not require repeated model training, enabling a computationally efficient architecture search. Our performance evaluation using benchmark data from hardware (HW)-NAS-Bench shows that the proposed NASC can improve the trade-off between communication latency and model accuracy, i.e., reduce the latency by approximately 40-60% from the baseline, with slight accuracy degradation.
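A toy rendering of the joint search problem (the fields, numbers, and exhaustive loop are illustrative assumptions; NASC itself samples from a one-shot supernet rather than scoring a hand-written candidate list): pick the most accurate (architecture, split point) pair whose total compute-plus-communication latency meets the budget.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    arch: str           # architecture sampled from a one-shot supernet
    split_layer: int    # layer after which the model is split
    device_ms: float    # on-device compute latency up to the split
    server_ms: float    # edge-server compute latency after the split
    payload_kb: float   # intermediate-feature payload (kilobits) sent over the network
    accuracy: float     # supernet-estimated accuracy (no retraining)

def total_latency_ms(c: Candidate, bandwidth_kbps: float) -> float:
    """Computation latency on both sides plus communication latency."""
    return c.device_ms + c.server_ms + 1000.0 * c.payload_kb / bandwidth_kbps

def search(cands, bandwidth_kbps=5000.0, latency_budget_ms=50.0):
    """Most accurate (architecture, split point) pair meeting the budget."""
    feasible = [c for c in cands
                if total_latency_ms(c, bandwidth_kbps) <= latency_budget_ms]
    return max(feasible, key=lambda c: c.accuracy, default=None)

best = search([
    Candidate("narrow", 4, 8.0, 5.0, 60.0, 0.71),
    Candidate("wide",   4, 15.0, 9.0, 120.0, 0.76),
    Candidate("wide",   8, 25.0, 4.0, 30.0, 0.76),
])
print(best)
```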
Open-set object detection (OSOD) has recently attracted much attention: it aims to detect unknown objects while correctly detecting and classifying known ones. We first point out that the scenario considered in recent OSOD studies, which, like open-set recognition (OSR), assumes an unlimited variety of unknown objects, has a fundamental problem: we cannot determine what should and should not be detected among such unlimited unknown objects, yet this determination is necessary for a detection task. This problem makes it difficult to evaluate the performance of unknown-object detection methods. We then introduce a novel OSOD scenario that deals only with unknown objects sharing a super-category with the known objects. It has many real-world applications, such as detecting an ever-growing number of fine-grained objects. This new setting is free of the above problem and its evaluation difficulty. Moreover, it makes the detection of unknown objects more realistic, owing to the visual similarity between known and unknown objects. We show through experimental results that a simple method based on the uncertainty of a standard detector's class predictions outperforms the current state-of-the-art OSOD methods tested in the previous setting.
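A minimal sketch of such an uncertainty-based baseline (the entropy criterion and the threshold are illustrative choices, not necessarily the paper's exact scoring): high entropy of the known-class posterior marks a detected box as an unknown object.

```python
import numpy as np

def flag_unknowns(class_logits: np.ndarray, entropy_thresh: float = 1.0):
    """Flag detections as 'unknown' from classifier uncertainty.

    Hypothetical sketch: entropy of the softmax over known classes is
    compared against a threshold (max softmax score would be an alternative).
    """
    z = class_logits - class_logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)   # softmax
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return entropy > entropy_thresh   # True = predict "unknown"

# Toy usage: two confident detections and one ambiguous one (5 known classes).
logits = np.array([[8.0, 0, 0, 0, 0],
                   [0, 7.5, 0, 0, 0],
                   [1.0, 1.1, 0.9, 1.0, 1.05]])
print(flag_unknowns(logits))  # [False False  True]
```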
Current state-of-the-art methods for image captioning employ region-based features, as they provide object-level information that is essential for describing the content of images; they are usually extracted by an object detector such as Faster R-CNN. However, they have several issues, such as a lack of contextual information, the risk of inaccurate detection, and high computational cost. The first two could be resolved by additionally using grid-based features. However, how to extract and fuse these two types of features remains uncharted. This paper proposes a Transformer-only neural architecture, dubbed GRIT (Grid- and Region-based Image captioning Transformer), that effectively utilizes the two visual features to generate better captions. GRIT replaces the CNN-based detector employed in previous methods with a DETR-based one, making it computationally faster. Moreover, its monolithic design consisting only of Transformers enables end-to-end training of the model. This innovative design and the integration of the dual visual features bring about significant performance improvement. Experimental results on several image captioning benchmarks show that GRIT outperforms previous methods in both inference accuracy and speed.
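A coarse sketch of dual-feature fusion (a minimal stand-in, not GRIT's architecture; GRIT uses a DETR-based detector and its own fusion scheme): grid and region tokens are concatenated into one memory that a standard caption decoder cross-attends to.

```python
import torch
import torch.nn as nn

class DualFeatureCaptionerSketch(nn.Module):
    """Sketch of fusing grid and region features for captioning.

    Grid and region tokens form one memory for a Transformer decoder that
    generates caption tokens (causal masking omitted for brevity). All
    dimensions and the fusion-by-concatenation are assumptions.
    """

    def __init__(self, dim: int = 256, vocab: int = 10000):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.out = nn.Linear(dim, vocab)

    def forward(self, grid_feats, region_feats, caption_tokens):
        # grid_feats: (B, G, dim); region_feats: (B, R, dim); tokens: (B, L).
        memory = torch.cat([grid_feats, region_feats], dim=1)  # (B, G+R, dim)
        tgt = self.embed(caption_tokens)
        return self.out(self.decoder(tgt, memory))             # (B, L, vocab)

logits = DualFeatureCaptionerSketch()(
    torch.randn(2, 49, 256), torch.randn(2, 10, 256),
    torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```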